41 research outputs found

    IDENTIFICATION AND QUANTIFICATION OF VARIABILITY MEASURES AFFECTING CODE REUSABILITY IN OPEN SOURCE ENVIRONMENT

    Get PDF
    Open source software (OSS) is one of the emerging areas in software engineering and is gaining the interest of the software development community. OSS started as a movement, and for many years software developers contributed to it as a hobby (for non-commercial purposes). Now, OSS components are being reused in component-based software development (CBSD) for commercial purposes. More recently, software engineering researchers have envisioned the use of OSS in software product lines (SPL), bringing it into a new arena. Being an emerging research area, it demands an exploratory study of the dimensions of this phenomenon. Furthermore, there is a need to assess the reusability of OSS, which is the focal point of these disciplines (component-based software engineering (CBSE), SPL, and OSS). In this research, a mixed-method approach is employed, specifically a 'partially mixed sequential dominant study', involving both a qualitative phase (interviews) and quantitative phases (survey and experiment). The qualitative phase involved seven respondents, the survey had a sample size of 396, and three experiments were conducted. The main contribution of this study is the exploration of the phenomenon 'reuse of OSS in reuse-intensive software development'. The findings include 7 categories and 39 dimensions. One of the dimensions, factors affecting reusability, was carried forward to the quantitative phase (survey and experiment). On the basis of the findings, a reusability attribute model was proposed at the class and package levels. Variability is one of the newly identified attributes of reusability. A comprehensive theoretical analysis of variability implementation mechanisms is conducted to propose metrics for its assessment. The reusability attribute model is validated by statistical analysis of 103 classes and 77 packages. An evolutionary reusability analysis of two open source software systems was conducted, in which different versions of the software were analyzed for their reusability. The results show a positive correlation between variability and reusability at the package level and validate the other identified attributes. These results will be helpful for conducting further studies in this area.
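
    A minimal sketch of the package-level correlation analysis described above, assuming hypothetical variability and reusability scores and a Pearson test; the metric values are illustrative only, not the study's measurements.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-package scores: a variability metric (e.g. a count of
    # configurable extension points) and an aggregated reusability score.
    variability = np.array([3, 7, 2, 9, 5, 4, 8, 6])
    reusability = np.array([0.41, 0.68, 0.35, 0.80, 0.55, 0.47, 0.74, 0.60])

    # Pearson's r quantifies the strength of the positive linear relationship
    # reported at the package level.
    r, p_value = pearsonr(variability, reusability)
    print(f"r = {r:.2f}, p = {p_value:.4f}")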

    Exploring Deep Learning Techniques for Glaucoma Detection: A Comprehensive Review

    Full text link
    Glaucoma is one of the primary causes of vision loss around the world, necessitating accurate and efficient detection methods. Traditional manual detection approaches have limitations in terms of cost, time, and subjectivity. Recent developments in deep learning demonstrate potential for automating glaucoma detection by identifying relevant features in retinal fundus images. This article provides a comprehensive overview of cutting-edge deep learning methods used for the segmentation, classification, and detection of glaucoma. By analyzing recent studies, the effectiveness and limitations of these techniques are evaluated, key findings are highlighted, and potential areas for further research are identified. The use of deep learning algorithms may significantly improve the efficacy, usefulness, and accuracy of glaucoma detection. The findings from this research contribute to the ongoing advancements in automated glaucoma detection and have implications for improving patient outcomes and reducing the global burden of glaucoma.
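
    As a concrete illustration of the transfer-learning setups such reviews commonly cover, the sketch below fine-tunes a pre-trained ResNet-18 for binary glaucoma classification on fundus images; the backbone choice, folder layout, and hyperparameters are assumptions for illustration, not a method taken from any particular reviewed study.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Pre-trained ImageNet backbone repurposed for glaucoma vs. normal.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two output classes

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder layout: fundus/train/{glaucoma,normal}/*.png
    train_set = datasets.ImageFolder("fundus/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:          # a single epoch, for brevity
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()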

    A Weighted Linear Combining Scheme for Cooperative Spectrum Sensing

    Get PDF
    Cooperative spectrum sensing exploits the spatial diversity of secondary users (SUs) to reliably detect the availability of a spectrum. Soft energy combining schemes have optimal detection performance at the cost of high cooperation overhead, since the actual sensed data is required at the fusion center. To reduce cooperation overhead, hard combining shares only local decisions; however, its detection performance is suboptimal due to the loss of information. In this paper, a weighted linear combining scheme is proposed in which an SU performs a local sensing test based on two threshold levels. If the local test result lies between the two thresholds, the SU reports neither its local decision nor its sequentially estimated unknown SNR parameter values to the fusion center. Thereby, uncertain decisions about the presence or absence of the primary-user signal are suppressed. Simulation results suggest that the detection performance of the proposed scheme is close to that of optimal soft combining schemes, yet its overhead is similar to that of hard combining techniques.
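
    A minimal sketch of the double-threshold local test described above, under assumed values; the energy-detection statistic, threshold levels, and the convention of returning None to withhold a report are illustrative, not the paper's exact formulation.

    import numpy as np

    def local_decision(samples, lower, upper):
        """Double-threshold local sensing test at one secondary user.

        Returns 1 (primary user present), 0 (absent), or None when the test
        statistic falls between the thresholds, in which case nothing is
        reported to the fusion center and the uncertain decision is suppressed.
        """
        energy = np.mean(np.abs(samples) ** 2)  # simple energy-detection statistic
        if energy >= upper:
            return 1
        if energy <= lower:
            return 0
        return None  # withhold the report

    # Noise-only samples have unit average energy, which falls below the assumed
    # lower threshold, so this SU reports 'absent' (0).
    rng = np.random.default_rng(0)
    print(local_decision(rng.normal(0, 1, 1000), lower=1.1, upper=1.5))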

    Resource optimization‐based software risk reduction model for large‐scale application development

    Get PDF
    Software risks are a common phenomenon in the software development lifecycle, and risks grow into larger problems if they are not dealt with on time. Software risk management is a strategy that focuses on the identification, management, and mitigation of risk factors in the software development lifecycle. The management itself depends on the nature, size, and skill of the project under consideration. This paper proposes a model that identifies and deals with risk factors by introducing different observatory and participatory project factors. It is assumed that most risk factors can be dealt with through effective business processing, which in turn addresses the orientation of risks and the elimination or reduction of those risk factors that emerge over time. The model proposes different combinations of resource allocation that can help conclude a software project with a greater degree of acceptability. This paper presents a Risk Reduction Model, which effectively handles application development risks. The model can synchronize its working with medium- to large-scale software projects. The reduction in software failures positively affects the software development environment, and software failures are consequently reduced. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
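
    The prioritization idea behind such resource-oriented risk models can be illustrated with a minimal sketch that ranks risk factors by exposure (probability × impact) and spends a fixed mitigation budget on the highest exposures first; the factor names, numbers, and allocation rule are illustrative assumptions, not the paper's model.

    # Hypothetical risk register: (factor, probability, impact on a 1-10 scale)
    risks = [
        ("unclear requirements", 0.6, 8),
        ("staff turnover",       0.3, 6),
        ("schedule pressure",    0.7, 5),
        ("new technology",       0.4, 7),
    ]

    budget = 20                      # available mitigation effort (person-days)
    cost_per_unit_exposure = 4       # assumed cost model

    # Exposure = probability * impact; mitigate the largest exposures first
    # until the budget runs out, then accept and monitor the remainder.
    for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        exposure = prob * impact
        cost = exposure * cost_per_unit_exposure
        if cost <= budget:
            budget -= cost
            print(f"mitigate '{name}' (exposure {exposure:.1f}, cost {cost:.1f})")
        else:
            print(f"accept/monitor '{name}' (exposure {exposure:.1f})")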

    A Novel Fragile Zero-Watermarking Algorithm for Digital Medical Images

    Get PDF
    The wireless transmission of patients' particulars and medical data to a specialised centre after an initial screening at a remote health facility may pose threats to patients' data privacy and integrity. Although watermarking can be used to mitigate such risks, it should not degrade the medical data, because any change in the data characteristics may lead to a false diagnosis. Hence, zero watermarking can be helpful in these circumstances. At the same time, the transmitted data must raise a warning in case of tampering or a malicious attack, so the watermarking should be fragile in nature. Consequently, a novel hybrid approach using fragile zero watermarking is proposed in this study. Visual cryptography and chaotic randomness are major components of the proposed algorithm to avoid any breach of information through an illegitimate attempt. The proposed algorithm is evaluated using two datasets: the Digital Database for Screening Mammography and the Mini Mammographic Image Analysis Society database. In addition, a breast cancer detection system using a convolutional neural network is implemented to analyse the diagnosis in case of a malicious attack and after watermark insertion. The experimental results indicate that the proposed algorithm is reliable for privacy protection and data authentication.
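
    A minimal sketch of the general zero-watermarking idea (not the authors' exact visual-cryptography and chaos-based construction): a binary feature map is derived from the unmodified image and combined with the watermark logo to form a secret share kept outside the image, and any later tampering disturbs the logo recovered from that share. The block-mean feature, sizes, and the seeded generator standing in for chaotic scrambling are assumptions.

    import numpy as np

    def feature_bits(image, block=8):
        """Binary feature map: 1 where a block's mean exceeds the global mean.

        The image itself is never modified, which is the defining property of
        zero watermarking."""
        h = (image.shape[0] // block) * block
        w = (image.shape[1] // block) * block
        blocks = image[:h, :w].reshape(h // block, block, w // block, block)
        return (blocks.mean(axis=(1, 3)) > image.mean()).astype(np.uint8)

    rng = np.random.default_rng(42)                 # stand-in for chaotic scrambling
    image = rng.integers(0, 256, (128, 128)).astype(float)
    logo = rng.integers(0, 2, (16, 16)).astype(np.uint8)

    # Registration: store the share externally; the medical image is untouched.
    share = feature_bits(image) ^ logo

    # Verification: re-extract the features and recover the logo; tampering with
    # the image corrupts the recovered logo, triggering the fragile alarm.
    recovered = feature_bits(image) ^ share
    assert np.array_equal(recovered, logo)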

    Genome-wide association identifies nine common variants associated with fasting proinsulin levels and provides new insights into the pathophysiology of type 2 diabetes.

    Get PDF
    OBJECTIVE: Proinsulin is a precursor of mature insulin and C-peptide. Higher circulating proinsulin levels are associated with impaired β-cell function, raised glucose levels, insulin resistance, and type 2 diabetes (T2D). Studies of the insulin processing pathway could provide new insights about T2D pathophysiology. RESEARCH DESIGN AND METHODS: We have conducted a meta-analysis of genome-wide association tests of ∼2.5 million genotyped or imputed single nucleotide polymorphisms (SNPs) and fasting proinsulin levels in 10,701 nondiabetic adults of European ancestry, with follow-up of 23 loci in up to 16,378 individuals, using additive genetic models adjusted for age, sex, fasting insulin, and study-specific covariates. RESULTS: Nine SNPs at eight loci were associated with proinsulin levels (P < 5 × 10⁻⁸). Two loci (LARP6 and SGSM2) have not been previously related to metabolic traits, one (MADD) has been associated with fasting glucose, one (PCSK1) has been implicated in obesity, and four (TCF7L2, SLC30A8, VPS13C/C2CD4A/B, and ARAP1, formerly CENTD2) increase T2D risk. The proinsulin-raising allele of ARAP1 was associated with a lower fasting glucose (P = 1.7 × 10⁻⁴), improved β-cell function (P = 1.1 × 10⁻⁵), and lower risk of T2D (odds ratio 0.88; P = 7.8 × 10⁻⁶). Notably, PCSK1 encodes the protein prohormone convertase 1/3, the first enzyme in the insulin processing pathway. A genotype score composed of the nine proinsulin-raising alleles was not associated with coronary disease in two large case-control datasets. CONCLUSIONS: We have identified nine genetic variants associated with fasting proinsulin. Our findings illuminate the biology underlying glucose homeostasis and T2D development in humans and argue against a direct role of proinsulin in coronary artery disease pathogenesis.
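
    The per-SNP association test described above amounts to an additive genetic model: a regression of the proinsulin outcome on the 0/1/2 count of effect alleles with covariate adjustment. The sketch below illustrates this on simulated data and is not the consortium's actual analysis pipeline; the effect sizes, covariate set, and outcome transformation are assumptions.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    dosage = rng.binomial(2, 0.3, n)            # 0/1/2 copies of the effect allele
    age = rng.normal(50, 10, n)
    sex = rng.integers(0, 2, n)
    fasting_insulin = rng.normal(60, 15, n)

    # Simulated log-proinsulin with a small additive per-allele effect.
    log_proinsulin = (0.05 * dosage + 0.002 * age + 0.1 * sex
                      + rng.normal(0, 0.5, n))

    # Additive model: each extra allele shifts the mean outcome by a fixed amount,
    # adjusted for age, sex, and fasting insulin as described in the abstract.
    X = sm.add_constant(np.column_stack([dosage, age, sex, fasting_insulin]))
    fit = sm.OLS(log_proinsulin, X).fit()
    print(fit.params[1], fit.pvalues[1])        # per-allele effect and its P value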

    An Efficient Method for Breast Mass Classification Using Pre-Trained Deep Convolutional Networks

    No full text
    Masses are early indicators of breast cancer, and distinguishing between benign and malignant masses is a challenging problem. Many machine learning- and deep learning-based methods have been proposed to distinguish benign masses from malignant ones on mammograms. However, their performance is not satisfactory. Though deep learning has been shown to be effective in a variety of applications, it is challenging to apply to mass classification since it requires a large dataset for training and the number of available annotated mammograms is limited. A common approach to overcome this issue is to employ a pre-trained model and fine-tune it on mammograms. Though this works well, it still involves fine-tuning a huge number of learnable parameters with a small number of annotated mammograms. To tackle this small-training-set problem in the training or fine-tuning of CNN models, we introduce a new method, which uses a pre-trained CNN without any modifications as an end-to-end model for mass classification, without fine-tuning the learnable parameters. The training phase only identifies the neurons in the classification layer that yield higher activation for each class, and the activation of these neurons is later used to classify an unknown mass ROI. We evaluated the proposed approach with different CNN models on public-domain benchmark datasets, namely DDSM and INbreast. The results show that it outperforms the state-of-the-art deep learning-based methods.
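
    A minimal sketch of the frozen-network idea described above: an unmodified, pre-trained ImageNet classifier is used end to end, the "training" step merely records which classification-layer neurons respond most strongly to each mass class, and a new ROI is labelled by comparing those neurons' activations. The backbone choice, the number of selected neurons, and the scoring rule are assumptions, not the paper's exact procedure.

    import numpy as np
    import torch
    from torchvision import models

    # Frozen, unmodified ImageNet classifier; no parameters are fine-tuned.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

    def activations(batch):
        """Classification-layer activations, shape (N, 1000)."""
        with torch.no_grad():
            return model(batch).numpy()

    def fit_class_neurons(benign_rois, malignant_rois, k=10):
        """'Training': pick the k output neurons with the highest mean activation
        for each class (the selection and scoring rules here are assumptions)."""
        return {
            "benign": np.argsort(activations(benign_rois).mean(0))[-k:],
            "malignant": np.argsort(activations(malignant_rois).mean(0))[-k:],
        }

    def classify(roi, neurons):
        """Assign the class whose selected neurons are, on average, more active."""
        act = activations(roi.unsqueeze(0))[0]
        return max(neurons, key=lambda c: act[neurons[c]].mean())

    # Random tensors stand in for preprocessed 224x224 RGB mass ROIs.
    benign = torch.randn(4, 3, 224, 224)
    malignant = torch.randn(4, 3, 224, 224)
    neurons = fit_class_neurons(benign, malignant)
    print(classify(torch.randn(3, 224, 224), neurons))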

    Improving the security in healthcare information system through elman neural network based classifier

    No full text
    Intrusions are critical issues in healthcare information systems because a single intrusion can cause a health issue through any manipulation of a patient's medical record. Several intrusion detection (ID) techniques have been used, but their performance remains a concern. The efficiency of intrusion detection systems (IDSs) depends on an optimal classifier architecture to categorize the data as intrusive or normal, which requires increasing the detection rate (DR) and decreasing the false alarm rate (FAR). Therefore, finding an optimal classifier architecture that enhances IDS performance is an important subject. This study proposes an Elman Neural Network-based IDS as the classification technique in order to enhance performance. The NSL-KDD dataset is used for evaluation and assessment. Moreover, Principal Component Analysis (PCA) is applied in this work to transform the raw features into the principal-component space and to select features based on their sensitivity. The proposed approach is capable of enhancing performance through an increased DR and a decreased FAR. © 2017 American Scientific Publishers All rights reserved.
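
    A minimal sketch of the PCA-plus-Elman-network pipeline described above; the component count, hidden size, placeholder data, and the treatment of each record as a length-one sequence are assumptions for illustration, not the study's exact configuration.

    import torch
    import torch.nn as nn
    from sklearn.decomposition import PCA

    # PCA reduces the encoded NSL-KDD feature vector (41 raw features) to a
    # smaller principal-component space; 20 components is an assumed choice.
    pca = PCA(n_components=20)

    class ElmanIDS(nn.Module):
        """Elman-style recurrent classifier: torch.nn.RNN is a standard Elman RNN
        (the hidden state acts as the context layer), followed by a
        normal-vs-intrusion output head."""
        def __init__(self, in_dim=20, hidden=64, classes=2):
            super().__init__()
            self.rnn = nn.RNN(in_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, classes)

        def forward(self, x):              # x: (batch, seq_len, in_dim)
            _, h = self.rnn(x)
            return self.head(h[-1])

    # Placeholder data standing in for preprocessed NSL-KDD records.
    X = torch.randn(256, 41)
    X_pca = torch.tensor(pca.fit_transform(X.numpy()), dtype=torch.float32)
    scores = ElmanIDS()(X_pca.unsqueeze(1))    # each record as a length-1 sequence
    print(scores.shape)                        # (256, 2)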